
Underwater robotics expert reveals 'shipwreck city' hiding beneath major urban lake

FOX News

ROV specialist Phil Parisi is documenting nearly 100 underwater targets in Seattle's Lake Union, calling the urban lake a "shipwreck city" hiding a century of maritime history.


Two freak plays in one MLB night leave announcers, fans stunned

FOX News

We had a homer land on top of the foul pole, and a line drive land in a pitcher's shirt. Jo Adell just pulled off something you may NEVER see again -- robbing THREE home runs in a single game vs the Mariners. Is this the greatest defensive performance in MLB history? Ricky Cobb reacts like only the Super 70s Sports Guy can. All eyes are on today's NFL Draft, but I doubt it'll produce anything like what Major League Baseball gave us Wednesday night.


Palantir Employees Are Starting to Wonder if They're the Bad Guys

WIRED

Interviews with current and former Palantir employees, along with internal Slack messages obtained by WIRED, suggest a workforce in turmoil. It took just a few months of President Donald Trump's second term for Palantir employees to begin questioning their company's commitment to civil liberties. Last fall, as Palantir became the technological backbone of Trump's immigration enforcement machinery, providing software for identifying, tracking, and helping deport immigrants on behalf of the Department of Homeland Security (DHS), current and former employees started ringing the alarm. When one of them picked up a call, the greeting was a question: "Are you tracking Palantir's descent into fascism?" "That was their greeting," the other former employee says.


Decentralized Machine Learning with Centralized Performance Guarantees via Gibbs Algorithms

Bermudez, Yaiza, Perlaza, Samir, Esnaola, Iñaki

arXiv.org Machine Learning

In this paper, it is shown, for the first time, that centralized performance is achievable in decentralized learning without sharing the local datasets. Specifically, when clients adopt an empirical risk minimization with relative-entropy regularization (ERM-RER) learning framework and a forward-backward communication between clients is established, it suffices to share the locally obtained Gibbs measures to achieve the same performance as that of a centralized ERM-RER with access to all the datasets. The core idea is that the Gibbs measure produced by client $k$ is used, as reference measure, by client $k+1$. This effectively establishes a principled way to encode prior information through a reference measure. In particular, achieving centralized performance in the decentralized setting requires a specific scaling of the regularization factors with the local sample sizes. Overall, this result opens the door to novel decentralized learning paradigms that shift the collaboration strategy from sharing data to sharing the local inductive bias via the reference measures over the set of models.
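On a finite model class, the chaining of Gibbs measures described above can be checked numerically: with a uniform reference measure, and the regularization written so that client $k$'s empirical risk enters with weight $n_k$, passing client A's Gibbs measure to client B as its reference reproduces the centralized Gibbs measure exactly. The risks, sample sizes, and temperature below are arbitrary illustration values, not quantities from the paper.

```python
import numpy as np

def gibbs_posterior(reference, risks, n, lam):
    """Gibbs measure: P(m) proportional to reference(m) * exp(-(n/lam) * risk(m))."""
    w = reference * np.exp(-(n / lam) * risks)
    return w / w.sum()

rng = np.random.default_rng(0)
num_models = 5
# Hypothetical per-client empirical risks for each model in a finite class.
risks_a = rng.uniform(0, 1, num_models)
risks_b = rng.uniform(0, 1, num_models)
n_a, n_b, lam = 30, 50, 10.0

uniform = np.full(num_models, 1.0 / num_models)

# Decentralized: client A's Gibbs measure becomes client B's reference.
p_a = gibbs_posterior(uniform, risks_a, n_a, lam)
p_b = gibbs_posterior(p_a, risks_b, n_b, lam)

# Centralized: one Gibbs measure from the pooled, sample-size-weighted risks.
p_central = uniform * np.exp(-(n_a * risks_a + n_b * risks_b) / lam)
p_central /= p_central.sum()

print(np.allclose(p_b, p_central))  # the two measures coincide
```

The coincidence is just the telescoping of the exponential weights; it is why the sample-size scaling of the regularization factor matters in this toy version.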


Enhancing AI and Dynamical Subseasonal Forecasts with Probabilistic Bias Correction

Guan, Hannah, Mouatadid, Soukayna, Orenstein, Paulo, Cohen, Judah, Dong, Haiyu, Ni, Zekun, Berman, Jeremy, Flaspohler, Genevieve, Lu, Alex, Schloer, Jakob, Talib, Joshua, Weyn, Jonathan A., Mackey, Lester

arXiv.org Machine Learning

Decision-makers rely on weather forecasts to plant crops, manage wildfires, allocate water and energy, and prepare for weather extremes. Today, such forecasts enjoy unprecedented accuracy out to two weeks thanks to steady advances in physics-based dynamical models and data-driven artificial intelligence (AI) models. However, model skill drops precipitously at subseasonal timescales (2 - 6 weeks ahead), due to compounding errors and persistent biases. To counter this degradation, we introduce probabilistic bias correction (PBC), a machine learning framework that substantially reduces systematic error by learning to correct historical probabilistic forecasts. When applied to the leading dynamical and AI models from the European Centre for Medium-Range Weather Forecasts (ECMWF), PBC doubles the subseasonal skill of the AI Forecasting System and improves the skill of the operationally-debiased dynamical model for 91% of pressure, 92% of temperature, and 98% of precipitation targets. We designed PBC for operational deployment, and, in ECMWF's 2025 real-time forecasting competition, its global forecasts placed first for all weather variables and lead times, outperforming the dynamical models from six operational forecasting centers, an international dynamical multi-model ensemble, ECMWF's AI Forecasting System, and the forecasting systems of 34 teams worldwide. These probabilistic skill gains translate into more accurate prediction of extreme events and have the potential to improve agricultural planning, energy management, and disaster preparedness in vulnerable communities.
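The abstract does not describe PBC's internals, so the following is only a generic illustration of the underlying idea of learning a correction from paired historical forecasts and observations. It uses simple empirical quantile mapping on synthetic data with an injected systematic bias, not the paper's method.

```python
import numpy as np

def fit_quantile_map(hist_forecasts, hist_obs, n_q=19):
    """Learn a monotone correction curve from paired historical data."""
    qs = np.linspace(0.05, 0.95, n_q)
    fq = np.quantile(hist_forecasts, qs)   # forecast climatology
    oq = np.quantile(hist_obs, qs)         # observed climatology
    return fq, oq

def apply_quantile_map(forecast, fq, oq):
    """Map new forecast values through the learned quantile curve."""
    return np.interp(forecast, fq, oq)

rng = np.random.default_rng(1)
truth = rng.normal(0.0, 1.0, 2000)
# Simulated model output with a systematic warm bias and inflated spread.
raw = truth * 1.5 + 2.0 + rng.normal(0.0, 0.3, truth.size)

fq, oq = fit_quantile_map(raw, truth)
corrected = apply_quantile_map(raw, fq, oq)

print(abs(raw.mean() - truth.mean()), abs(corrected.mean() - truth.mean()))
```

After the mapping, both the mean bias and the inflated spread shrink toward the observed climatology, which is the basic effect any probabilistic bias correction is after.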


From Ground Truth to Measurement: A Statistical Framework for Human Labeling

Chew, Robert, Eckman, Stephanie, Kern, Christoph, Kreuter, Frauke

arXiv.org Machine Learning

Supervised machine learning assumes that labeled data provide accurate measurements of the concepts models are meant to learn. Yet in practice, human labeling introduces systematic variation arising from ambiguous items, divergent interpretations, and simple mistakes. Machine learning research commonly treats all disagreement as noise, which obscures these distinctions and limits our understanding of what models actually learn. This paper reframes annotation as a measurement process and introduces a statistical framework for decomposing labeling outcomes into interpretable sources of variation: instance difficulty, annotator bias, situational noise, and relational alignment. The framework extends classical measurement-error models to accommodate both shared and individualized notions of truth, reflecting traditional and human label variation interpretations of error, and provides a diagnostic for assessing which regime better characterizes a given task. Applying the proposed model to a multi-annotator natural language inference dataset, we find empirical evidence for all four theorized components and demonstrate the effectiveness of our approach. We conclude with implications for data-centric machine learning and outline how this approach can guide the development of a more systematic science of labeling.
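A minimal sketch of the kind of decomposition the abstract describes, under an assumed additive model on a continuous label scale: scores split into an instance effect (difficulty), an annotator effect (bias), and residual (situational) noise, recovered by a two-way ANOVA-style method of moments. The generative model and variances are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, n_annot = 200, 12

# Hypothetical generative model:
# score[i, j] = mu + item_effect[i] + annotator_bias[j] + noise[i, j]
mu = 0.0
item = rng.normal(0, 1.0, n_items)              # instance difficulty
bias = rng.normal(0, 0.5, n_annot)              # annotator bias
noise = rng.normal(0, 0.3, (n_items, n_annot))  # situational noise
scores = mu + item[:, None] + bias[None, :] + noise

# Method-of-moments recovery via row/column means.
grand = scores.mean()
item_hat = scores.mean(axis=1) - grand
bias_hat = scores.mean(axis=0) - grand
resid = scores - grand - item_hat[:, None] - bias_hat[None, :]

print(np.corrcoef(item, item_hat)[0, 1], np.corrcoef(bias, bias_hat)[0, 1])
```

With a fully crossed design like this, the row and column means recover the item and annotator effects up to averaging noise; the residual matrix is what remains to be attributed to situational noise or relational alignment.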


Privacy-Accuracy Trade-offs in High-Dimensional LASSO under Perturbation Mechanisms

Sakata, Ayaka, Tanzawa, Haruka

arXiv.org Machine Learning

We study privacy-preserving sparse linear regression in the high-dimensional regime, focusing on the LASSO estimator. We analyze two widely used mechanisms for differential privacy: output perturbation, which injects noise into the estimator, and objective perturbation, which adds a random linear term to the loss function. Using approximate message passing (AMP), we characterize the typical behavior of these estimators under random design and privacy noise. To quantify privacy, we adopt typical-case measures, including the on-average KL divergence, which admits a hypothesis-testing interpretation in terms of distinguishability between neighboring datasets. Our analysis reveals that sparsity plays a central role in shaping the privacy-accuracy trade-off: stronger regularization can improve privacy by stabilizing the estimator against single-point data changes. We further show that the two mechanisms exhibit qualitatively different behaviors. In particular, for objective perturbation, increasing the noise level can have non-monotonic effects, and excessive noise may destabilize the estimator, leading to increased sensitivity to data perturbations. Our results demonstrate that AMP provides a powerful framework for analyzing privacy-accuracy trade-offs in high-dimensional sparse models.
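A toy sketch of the output-perturbation mechanism in this setting: solve the LASSO (here with plain ISTA) and release the estimate plus Gaussian noise. The noise scale below is an arbitrary placeholder; a real deployment would calibrate it to the mechanism's sensitivity and the target privacy level, and this sketch makes no AMP-based claims.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Plain ISTA for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    b = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - grad / L, lam / L)
    return b

rng = np.random.default_rng(3)
n, d = 200, 50
X = rng.normal(size=(n, d))
b_true = np.zeros(d)
b_true[:3] = [2.0, -1.5, 1.0]           # sparse ground truth
y = X @ b_true + rng.normal(0, 0.1, n)

b_hat = lasso_ista(X, y, lam=0.05)

# Output perturbation: release the estimator plus noise; sigma is an
# uncalibrated placeholder standing in for a sensitivity-based scale.
sigma = 0.1
b_private = b_hat + rng.normal(0, sigma, d)
print(np.linalg.norm(b_hat - b_true), np.linalg.norm(b_private - b_true))
```

The qualitative point from the abstract shows up even here: the L1 penalty stabilizes the estimate, so the same released noise level buys relatively more indistinguishability when the solution is sparse.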


Gradient Descent with Projection Finds Over-Parameterized Neural Networks for Learning Low-Degree Polynomials with Nearly Minimax Optimal Rate

Yang, Yingzhen, Li, Ping

arXiv.org Machine Learning

We study the problem of learning a low-degree spherical polynomial of degree $k_0 = \Theta(1) \ge 1$ defined on the unit sphere in $\mathbb{R}^d$ by training an over-parameterized two-layer neural network with augmented features. Our main result is a significantly improved sample complexity for learning such low-degree polynomials. We show that, for any regression risk $\epsilon \in (0, \Theta(d^{-k_0})]$, an over-parameterized two-layer neural network trained by a novel Gradient Descent with Projection (GDP) algorithm requires a sample complexity of $n \asymp \Theta(\log(4/\delta) \cdot d^{k_0}/\epsilon)$ with probability $1-\delta$ for $\delta \in (0,1)$, in contrast with the representative sample complexity $\Theta(d^{k_0} \max\{\epsilon^{-2}, \log d\})$. Moreover, such sample complexity is nearly unimprovable, since the trained network attains a nearly optimal nonparametric regression risk of order $\log(4/\delta) \cdot \Theta(d^{k_0}/n)$ with probability at least $1-\delta$. On the other hand, the minimax optimal rate for the regression risk with a kernel of rank $\Theta(d^{k_0})$ is $\Theta(d^{k_0}/n)$, so the rate achieved by the network trained by GDP is nearly minimax optimal. In the case that the ground-truth degree $k_0$ is unknown, we present a novel and provable adaptive degree selection algorithm which identifies the true degree and achieves the same nearly optimal regression rate. To the best of our knowledge, this is the first time that a nearly optimal risk bound has been obtained by training an over-parameterized neural network with a popular activation function (ReLU) and an algorithmic guarantee for learning low-degree spherical polynomials. Due to the feature learning capability of GDP, our results go beyond the regular Neural Tangent Kernel (NTK) limit.
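The paper's GDP algorithm is specific to its network and feature construction; as a generic illustration of the projection idea only, here is projected gradient descent on a toy least-squares problem, with a Euclidean projection onto a norm ball after every step. All problem sizes, step sizes, and the ball radius are illustrative assumptions.

```python
import numpy as np

def project_ball(w, radius):
    """Euclidean projection onto the ball {w : ||w||_2 <= radius}."""
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else w * (radius / nrm)

def projected_gd(grad_fn, w0, lr, radius, steps):
    """Gradient step followed by projection, repeated for a fixed budget."""
    w = w0
    for _ in range(steps):
        w = project_ball(w - lr * grad_fn(w), radius)
    return w

# Toy least-squares objective on random data.
rng = np.random.default_rng(4)
A = rng.normal(size=(100, 10))
x_star = rng.normal(size=10)
b = A @ x_star

grad = lambda w: A.T @ (A @ w - b) / len(b)
x_hat = projected_gd(grad, np.zeros(10), lr=0.1,
                     radius=np.linalg.norm(x_star) + 1.0, steps=2000)
print(np.linalg.norm(x_hat - x_star))
```

Because the ball contains the minimizer here, the projection never moves the iterate away from the solution; constraining the iterates this way is the standard device for keeping trained parameters in a well-behaved set.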


Learning Sparse Gaussian Graphical Models with Overlapping Blocks

Seyed Mohammad Javad Hosseini, Su-In Lee

Neural Information Processing Systems

The first two terms, $\log\det(\Theta)$ and $-\mathrm{trace}(S\Theta)$, in Eq. (3) correspond to $\log P(X \mid \Theta)$, the log-likelihood of the GGM given a particular parameter $\Theta$ (i.e., an estimate of $\Sigma^{-1}$), as described in Section 2.1.
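The fit term referred to here, $\log\det(\Theta) - \mathrm{trace}(S\Theta)$, is the standard Gaussian log-likelihood (up to constants) as a function of the precision matrix. A quick numerical check with an illustrative 2-by-2 covariance, unrelated to the paper's experiments:

```python
import numpy as np

def ggm_loglik_terms(Theta, S):
    """log det(Theta) - trace(S @ Theta): the fit term of the GGM objective."""
    sign, logdet = np.linalg.slogdet(Theta)
    return logdet - np.trace(S @ Theta)

rng = np.random.default_rng(5)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
Theta_true = np.linalg.inv(Sigma)          # true precision matrix
X = rng.multivariate_normal(np.zeros(2), Sigma, size=5000)
S = X.T @ X / len(X)                       # sample covariance

# The true precision scores higher on this term than a mismatched one.
print(ggm_loglik_terms(Theta_true, S), ggm_loglik_terms(np.eye(2), S))
```

The term is maximized at $\Theta = S^{-1}$, which is why methods like GRAB add structured sparsity penalties on top of it rather than maximizing it alone.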


How debit card fraud can happen without using the card

FOX News
